Artificial Intelligence (AI) has become commonplace for solving routine everyday tasks. Because of the exponential growth in medical imaging data volume and complexity, the workload on radiologists is steadily increasing. We project that the gap between the number of imaging exams and the number of expert radiologist readers required to cover this increase will continue to expand, consequently introducing a demand for AI-based tools that improve the efficiency with which radiologists can comfortably interpret these exams. AI has been shown to improve efficiency in medical-image generation, processing, and interpretation, and a variety of such AI models have been developed across research labs worldwide. However, very few of these, if any, find their way into routine clinical use, a discrepancy that reflects the divide between AI research and successful AI translation. To address the barriers to clinical deployment, we have formed the MONAI Consortium, an open-source community that is building standards for AI deployment in healthcare institutions and developing tools and infrastructure to facilitate their implementation. This report represents several years of weekly discussions and hands-on problem-solving experience by groups of industry experts and clinicians in the MONAI Consortium. We identify barriers between AI-model development in research labs and subsequent clinical deployment, and propose solutions. Our report provides guidance on processes that take an imaging AI model from development to clinical implementation in a healthcare institution. We discuss various AI integration points in a clinical radiology workflow. We also present a taxonomy of radiology AI use cases. Through this report, we intend to educate the stakeholders in healthcare and AI (AI researchers, radiologists, imaging informaticists, and regulators) about cross-disciplinary challenges and possible solutions.
Artificial Intelligence (AI) is having a tremendous impact across most areas of science. Applications of AI in healthcare have the potential to improve our ability to detect, diagnose, prognose, and intervene on human disease. For AI models to be used clinically, they need to be made safe, reproducible, and robust, and the underlying software framework must be aware of the particularities (e.g. geometry, physiology, physics) of the medical data being processed. This work introduces MONAI, a freely available, community-supported, and consortium-led PyTorch-based framework for deep learning in healthcare. MONAI extends PyTorch to support medical data, with a particular focus on imaging, and provides purpose-specific AI model architectures, transformations, and utilities that streamline the development and deployment of medical AI models. MONAI follows best practices for software development, providing an easy-to-use, robust, well-documented, and well-tested software framework. MONAI preserves the simple, additive, and compositional approach of its underlying PyTorch libraries. MONAI is being used by and receiving contributions from research, clinical, and industrial teams around the world, who are pursuing applications spanning nearly every aspect of healthcare.
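As a concrete illustration of this "simple, additive, and compositional" style, the following is a minimal sketch using MONAI's dictionary transforms, one of its bundled networks, and a segmentation loss; the transform choices, network hyperparameters, and synthetic tensors are illustrative assumptions, not a recommended configuration.

```python
# Minimal sketch of MONAI's compositional style (assumes `monai` and `torch` are installed).
import torch
from monai.transforms import Compose, LoadImaged, EnsureChannelFirstd, ScaleIntensityd, RandRotate90d
from monai.networks.nets import UNet
from monai.losses import DiceLoss

# Dictionary-based transforms keep images and labels aligned through the same pipeline.
preprocess = Compose([
    LoadImaged(keys=["image", "label"]),           # medical formats (e.g. NIfTI) via MONAI's image readers
    EnsureChannelFirstd(keys=["image", "label"]),  # enforce the channel-first layout expected by PyTorch
    ScaleIntensityd(keys=["image"]),               # intensity normalisation
    RandRotate90d(keys=["image", "label"], prob=0.5),
])

# A purpose-specific 3D segmentation architecture provided by MONAI.
net = UNet(spatial_dims=3, in_channels=1, out_channels=2,
           channels=(16, 32, 64, 128), strides=(2, 2, 2))
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)

# Smoke test on synthetic data; real use would feed `preprocess` with dictionaries of file paths.
x = torch.rand(1, 1, 64, 64, 64)
y = torch.randint(0, 2, (1, 1, 64, 64, 64)).float()
loss = loss_fn(net(x), y)
```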
This study introduces and examines the potential of an AI system to generate health awareness messages. The topic of folic acid, a vitamin that is critical during pregnancy, served as a test case. Using prompt engineering, we generated messages that could be used to raise awareness and compared them to retweeted human-generated messages via computational and human evaluation methods. The system was easy to use and prolific, and computational analyses revealed that the AI-generated messages were on par with human-generated ones in terms of sentiment, reading ease, and semantic content. Also, the human evaluation study showed that AI-generated messages ranked higher in message quality and clarity. We discuss the theoretical, practical, and ethical implications of these results.
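The computational side of such a comparison can be illustrated with off-the-shelf tools; textstat (Flesch reading ease) and NLTK's VADER sentiment analyser are stand-ins here, and the example messages are placeholders rather than the study's data.

```python
# Illustrative comparison of AI- vs. human-generated messages on reading ease and sentiment.
import textstat
from nltk.sentiment import SentimentIntensityAnalyzer  # requires nltk.download("vader_lexicon")

ai_messages = ["Folic acid before and during pregnancy helps prevent birth defects."]     # placeholder examples
human_messages = ["Planning a pregnancy? Ask your doctor about folic acid supplements."]  # placeholder examples

sia = SentimentIntensityAnalyzer()

def describe(messages):
    """Average Flesch reading ease and VADER compound sentiment over a set of messages."""
    return {
        "reading_ease": sum(textstat.flesch_reading_ease(m) for m in messages) / len(messages),
        "sentiment": sum(sia.polarity_scores(m)["compound"] for m in messages) / len(messages),
    }

print("AI:   ", describe(ai_messages))
print("Human:", describe(human_messages))
```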
Accurate PhotoVoltaic (PV) power generation forecasting is vital for the efficient operation of Smart Grids. The automated design of such accurate forecasting models for individual PV plants includes two challenges: First, information about the PV mounting configuration (i.e. inclination and azimuth angles) is often missing. Second, for new PV plants, the amount of historical data available to train a forecasting model is limited (cold-start problem). We address these two challenges by proposing a new method for day-ahead PV power generation forecasts called AutoPV. AutoPV is a weighted ensemble of forecasting models that represent different PV mounting configurations. This representation is achieved by pre-training each forecasting model on a separate PV plant and by scaling the model's output with the peak power rating of the corresponding PV plant. To tackle the cold-start problem, we initially weight each forecasting model in the ensemble equally. To tackle the problem of missing information about the PV mounting configuration, we use new data that become available during operation to adapt the ensemble weights to minimize the forecasting error. AutoPV is advantageous as the unknown PV mounting configuration is implicitly reflected in the ensemble weights, and only the PV plant's peak power rating is required to re-scale the ensemble's output. AutoPV also makes it possible to represent PV plants with panels distributed across several roofs with varying alignments, as these mounting configurations can be reflected proportionally in the weighting. Additionally, the required computing memory is decoupled when scaling AutoPV to hundreds of PV plants, which is beneficial in Smart Grids with limited computing capabilities. For a real-world data set with 11 PV plants, the accuracy of AutoPV is comparable to a model trained on two years of data and outperforms an incrementally trained model.
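The core idea can be sketched as follows: keep per-configuration forecasts normalised by their source plant's peak power, combine them with convex weights initialised uniformly (cold start), and refit the weights as operational data arrive. The non-negative least-squares weight update and all names below are illustrative assumptions, not the paper's exact procedure.

```python
# Conceptual sketch of a weighted ensemble over per-configuration PV forecasting models.
import numpy as np
from scipy.optimize import nnls

def ensemble_forecast(base_forecasts_normalized, weights, peak_power_kw):
    """base_forecasts_normalized: (n_models, horizon) forecasts scaled to [0, 1] by each source plant's peak power."""
    return peak_power_kw * weights @ base_forecasts_normalized

def update_weights(base_forecasts_normalized, observed_power, peak_power_kw):
    """Refit ensemble weights on newly observed data (non-negative least squares, normalised to sum to 1)."""
    w, _ = nnls(base_forecasts_normalized.T, observed_power / peak_power_kw)
    return w / w.sum() if w.sum() > 0 else np.full(len(w), 1.0 / len(w))

n_models, horizon = 5, 24
weights = np.full(n_models, 1.0 / n_models)             # cold start: equal weights
base = np.random.rand(n_models, horizon)                # placeholder per-configuration forecasts
observed = 10.0 * (0.6 * base[1] + 0.4 * base[3])       # pretend the plant mixes two configurations
weights = update_weights(base, observed, 10.0)          # plant peak power rating: 10 kW
forecast = ensemble_forecast(base, weights, 10.0)       # re-scaled day-ahead forecast
```

The mixture of mounting configurations (e.g. panels on differently aligned roofs) then shows up directly as the relative magnitudes of the fitted weights.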
Recently, RNN-Transducers have achieved remarkable results on various automatic speech recognition tasks. However, lattice-free sequence discriminative training methods, which obtain superior performance in hybrid models, have rarely been investigated for RNN-Transducers. In this work, we propose three lattice-free training objectives, namely lattice-free maximum mutual information, lattice-free segment-level minimum Bayes risk, and lattice-free minimum Bayes risk, which are applied to the final posterior output of a phoneme-based neural transducer with limited context dependency. Compared to criteria using N-best lists, lattice-free methods eliminate the decoding step for hypothesis generation during training, which leads to more efficient training. Experimental results show that lattice-free methods gain up to 6.5% relative improvement in word error rate compared to a sequence-level cross-entropy trained model. Compared to N-best-list-based minimum Bayes risk objectives, lattice-free methods gain a 40%-70% relative training-time speedup with only a small degradation in performance.
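For reference, the maximum mutual information criterion that the lattice-free variant optimises has the standard form below; this is a sketch of the general MMI objective, not the paper's exact formulation over the transducer's context-limited phoneme posteriors.

```latex
\mathcal{F}_{\mathrm{MMI}}(\theta)
  = \sum_{r} \log
    \frac{p_{\theta}(X_r \mid W_r)\, p(W_r)}
         {\sum_{W} p_{\theta}(X_r \mid W)\, p(W)}
```

In the lattice-free setting, the denominator sum over competing sequences is computed directly over the full label space rather than over hypotheses from an N-best list or lattice, which is what removes the decoding step during training.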
ASR can be improved by multi-task learning (MTL) with domain enhancing or domain adversarial training, two opposite objectives that aim to increase or decrease domain variance towards domain-aware or domain-agnostic ASR, respectively. In this work, we study how to best apply these two opposite objectives with speaker labels to improve conformer-based ASR. We also propose a novel adaptive gradient reversal layer for stable and effective adversarial training without tuning effort. Detailed analysis and experimental verification are conducted to show the optimal positions in the ASR neural network (NN) at which to apply speaker enhancing and adversarial training. We also explore their combination for further improvement, achieving the same performance as i-vectors plus adversarial training. Our best speaker-based MTL achieves a 7% relative improvement on the Switchboard Hub5'00 set. We also investigate the effect of such speaker-based MTL with respect to a cleaner dataset and a weaker ASR NN.
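A gradient reversal layer of the kind used for such adversarial MTL can be sketched in a few lines of PyTorch; this shows the standard, non-adaptive form, since the paper's adaptive scaling of the reversal strength is its contribution and is not reproduced here.

```python
# Standard gradient reversal layer: identity in the forward pass, negated (scaled) gradient in the backward pass.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)                       # identity forward

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None     # reversed, scaled gradient flows back to the encoder

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage sketch: encoder features feed the ASR head directly, and the speaker classifier through grad_reverse,
# so minimising the speaker loss pushes the encoder towards speaker-agnostic representations.
features = torch.randn(8, 256, requires_grad=True)               # placeholder encoder outputs
speaker_logits = torch.nn.Linear(256, 100)(grad_reverse(features, lambd=0.5))
```

Speaker enhancing corresponds to the same auxiliary classifier without the reversal, so the speaker loss instead increases speaker (domain) information in the shared representation.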
Automatic speech recognition (ASR) has been established as a well-performing technique for many scenarios where large amounts of labeled data are available. Additionally, unsupervised representation learning has recently helped to tackle tasks with limited data. Following this, hardware limitations and applications give rise to the question of how to efficiently take advantage of large pretrained models and reduce their complexity for downstream tasks. In this work, we study a challenging low-resource conversational telephony speech corpus from the medical domain in Vietnamese and German. We show the benefits of using unsupervised techniques beyond simple fine-tuning of large pre-trained models, discuss how to adapt them to a practical telephony task including bandwidth transfer, and investigate different data conditions for pre-training and fine-tuning. We outperform the project baselines by 22% relative using pre-training techniques. Further gains of 29% can be achieved by refinements of architecture and training, and 6% by adding 0.8 h of in-domain adaptation data.
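The generic fine-tuning pattern behind this line of work looks roughly as follows; the checkpoint name, vocabulary size, and synthetic inputs are placeholders, and telephony-specific steps such as bandwidth transfer are omitted, so this is a sketch of the general recipe rather than the paper's exact setup.

```python
# Generic pattern: add a CTC head to a self-supervised pretrained acoustic model and fine-tune on labeled data.
import torch
from transformers import Wav2Vec2ForCTC

model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-base",        # placeholder pretrained checkpoint
    vocab_size=32,                   # placeholder target vocabulary size
    ctc_loss_reduction="mean",
)
model.freeze_feature_encoder()       # keep the convolutional feature extractor fixed on small corpora

audio = torch.randn(2, 16000)                         # one second of placeholder 16 kHz audio per utterance
labels = torch.randint(1, 32, (2, 12))                # placeholder token IDs
loss = model(input_values=audio, labels=labels).loss  # CTC loss to backpropagate during fine-tuning
```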
Variational Bayesian posterior inference often requires simplifying approximations, such as mean-field parametrisation, to ensure tractability. However, prior work has associated the variational mean-field approximation for Bayesian neural networks with underfitting on small datasets or with large models. In this work, we show that invariances in the likelihood function of over-parametrised models contribute to this phenomenon, because these invariances complicate the structure of the posterior by introducing discrete and/or continuous modes that cannot be well approximated by a Gaussian mean-field distribution. In particular, we show that the mean-field approximation incurs an additional gap in the evidence lower bound compared to a purpose-built posterior that takes the known invariances into account. Importantly, this invariance gap is not constant; it vanishes as the approximation reverts to the prior. We first consider translation invariance in a linear model with a single data point in detail. We show that, although the true posterior can be constructed from a mean-field parametrisation, this is achieved only if the objective function accounts for the invariance gap. We then transfer the analysis of the linear model to neural networks. Our analysis provides a framework for future work to explore solutions to the invariance problem.
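For reference, the mean-field variational objective under discussion is the standard evidence lower bound (ELBO) with a fully factorised approximate posterior; the paper's "invariance gap" is an additional looseness of this bound relative to a posterior family that models the invariance-induced modes.

```latex
\log p(\mathcal{D})
  \;\ge\;
  \mathrm{ELBO}(q)
  = \mathbb{E}_{q(\theta)}\big[\log p(\mathcal{D} \mid \theta)\big]
    - \mathrm{KL}\big(q(\theta)\,\|\,p(\theta)\big),
\qquad
q(\theta) = \prod_i q_i(\theta_i).
```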
To operate safely, robots must be able to avoid collisions in uncertain environments. Existing uncertainty-aware motion planning methods typically make conservative assumptions about Gaussian distributions and obstacle geometry. Although visual perception can provide a more accurate representation of the environment, its use for safe motion planning is limited by the inherent miscalibration of neural networks and the challenge of obtaining adequate datasets. To address these limitations, we propose to use an ensemble of deep semantic segmentation networks trained on systematically augmented datasets to provide reliable probabilistic occupancy information. To avoid conservatism during motion planning, we incorporate this probabilistic perception directly via a scenario-based path planning approach. A velocity scheduling scheme is applied to the path to account for tracking inaccuracies. We demonstrate the effectiveness of the systematic data augmentation in combination with deep ensembles, and of the scenario-based planning approach in comparison with state-of-the-art methods, and we validate our framework in experiments involving a human hand.
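The perception side can be sketched as averaging per-pixel class probabilities over the ensemble members to obtain occupancy probabilities; the toy networks and the obstacle class index below are placeholders, not the paper's trained models or planner.

```python
# Minimal sketch: turn a deep ensemble of segmentation networks into probabilistic occupancy information.
import torch

def occupancy_probability(models, image, occupied_classes=(1,)):
    """image: (1, C, H, W); returns an (H, W) probability that each pixel belongs to an obstacle class."""
    with torch.no_grad():
        probs = torch.stack([m(image).softmax(dim=1) for m in models]).mean(dim=0)  # ensemble mean
    return probs[0, list(occupied_classes)].sum(dim=0)   # total probability mass of obstacle classes

# Toy ensemble of 5 "segmentation networks" standing in for the trained members.
models = [torch.nn.Conv2d(3, 2, kernel_size=1) for _ in range(5)]
occ = occupancy_probability(models, torch.rand(1, 3, 64, 64))   # (64, 64) occupancy probabilities
# A downstream planner can then sample or threshold these probabilities as scenario constraints
# instead of assuming Gaussian obstacle uncertainty.
```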
Object detection neural network models need to perform reliably in highly dynamic and safety-critical environments such as automated driving or robotics. It is therefore essential to verify the robustness of detection under unexpected hardware faults, such as soft errors, that can affect the system's perception module. Standard metrics based on average precision yield model vulnerability estimates at the object level rather than the image level. As we show in this paper, this does not provide an intuitive or representative indicator of the safety-related impact of silent data corruption caused by bit flips in the underlying memory, and instead leads to an over- or underestimation of typical fault-induced hazards. With a focus on safety-related real-time applications, we propose a new metric, IVMOD (Image-wise Vulnerability Metric for Object Detection), which quantifies vulnerability on the basis of images with erroneously detected (false positive, FP) or missed (false negative, FN) objects, combined with a severity analysis. Evaluations of several representative object detection models show that even a single bit flip can cause a severe silent data corruption event with potentially critical safety implications, for example generating more than 100 FPs or losing up to about 90% of the true positives (TPs) in an image. Furthermore, a single stuck-at fault can affect entire image sequences, causing temporarily persistent ghost detections that may be mistaken for real objects (covering up to about 83% of the image), while real objects in the scene are persistently missed (up to about 64% of TPs). Our work establishes a detailed understanding of the safety-related vulnerability of such critical workloads to hardware faults.
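A schematic of the image-wise counting idea behind such a metric: compare a fault-free ("golden") run with a fault-injected run of the same image and count ghost detections (FPs) and lost objects (FNs). The IoU matching and threshold below are illustrative, and the paper's exact IVMOD definition and severity analysis are not reproduced.

```python
# Schematic image-wise FP/FN counting between golden and fault-injected detections.
def iou(a, b):
    """Axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def image_fp_fn(golden_boxes, faulty_boxes, iou_thr=0.5):
    """Per-image counts of fault-induced ghost detections (FP) and lost golden objects (FN)."""
    fp = sum(not any(iou(f, g) >= iou_thr for g in golden_boxes) for f in faulty_boxes)
    fn = sum(not any(iou(g, f) >= iou_thr for f in faulty_boxes) for g in golden_boxes)
    return fp, fn

fp, fn = image_fp_fn([(0, 0, 10, 10), (20, 20, 30, 30)], [(0, 0, 10, 10), (50, 50, 60, 60)])
# -> fp = 1 ghost detection, fn = 1 missed object for this image
```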